Pre-trained language models (PLMs) often take advantage of monolingual and multilingual datasets that are freely available online to acquire general or mixed-domain knowledge before being deployed to specific tasks. Extra-large PLMs (xLPLMs) have recently been proposed, claiming top performance over smaller-sized PLMs on tasks such as machine translation (MT); these xLPLMs include Meta-AI's WMT21 dense 24 wide en-X and NLLB. In this work, we examine whether xLPLMs are absolutely superior to smaller-sized PLMs when fine-tuned for domain-specific MT. We use two in-domain data sets of different sizes: commercial automotive in-house data and clinical shared-task data from the ClinSpEn2022 challenge at WMT2022. We choose the popular Marian Helsinki as the smaller-sized PLM and two massive Transformers from Meta-AI as the xLPLMs. Our experimental investigation shows that 1) on the smaller-sized in-domain commercial automotive data, the xLPLM WMT21 dense 24 wide en-X does achieve much better evaluation scores than Marian under the SacreBLEU and hLEPOR metrics, even though its rate of score improvement after fine-tuning is lower than Marian's; 2) when fine-tuned on the relatively larger, carefully prepared clinical data, the xLPLM NLLB tends to lose its advantage over the smaller-sized Marian on two sub-tasks (clinical terms and ontology concepts) under the ClinSpEn-provided metrics METEOR, COMET, and ROUGE-L, and loses to Marian outright on all metrics, including SacreBLEU and BLEU; 3) the metrics do not always agree with each other on the same model outputs for the same task.
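As an illustration of how such system comparisons are typically scored, the following is a minimal sketch (not the paper's code) that computes corpus-level SacreBLEU for two hypothetical sets of system outputs against shared references using the `sacrebleu` library; the sentences below are placeholders.

```python
# Minimal sketch: comparing two MT systems with corpus-level SacreBLEU.
# The hypotheses and references below are placeholders, not the paper's data.
import sacrebleu

references = [
    "The engine control unit reports a fault code.",
    "Replace the brake fluid every two years.",
]
system_a = [  # e.g. outputs of a fine-tuned smaller-sized PLM
    "The engine control unit reports an error code.",
    "Replace the brake fluid every two years.",
]
system_b = [  # e.g. outputs of a fine-tuned xLPLM
    "The engine control unit reports a fault code.",
    "Change the brake fluid every second year.",
]

for name, hyps in [("system_a", system_a), ("system_b", system_b)]:
    bleu = sacrebleu.corpus_bleu(hyps, [references])
    print(f"{name}: SacreBLEU = {bleu.score:.2f}")
```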
Translation quality evaluation (TQE) is an essential task for both human translation (HT) and machine translation (MT) researchers. Translation service providers (TSPs) have to deliver large volumes of translations that meet customer specifications, under the severe constraints of demanding quality levels, tight time frames, and cost. MT researchers strive to make their models better, which also requires reliable quality evaluation. While automated machine translation evaluation (MTE) metrics and quality estimation (QE) tools are widely available and easily accessible, the existing automated tools are not good enough, and human assessment from professional translators (HAP) is often chosen as the gold standard \cite{han-etal-2021-TQA}. Human evaluations, however, are often accused of having low reliability and agreement. Is this caused by subjectivity or by statistics? How can we avoid checking the entire text, from the perspectives of cost and efficiency, and what is the optimal sample size of the translated text with which to reliably estimate the translation quality of the whole material? This work carries out such motivated research, to correctly estimate the confidence intervals \cite{Brown_etal2001Interval} depending on the sample size of the translated text, e.g. the number of words or sentences, that needs to be processed in a TQE workflow to achieve a confident and reliable evaluation of the overall translation quality. The methodologies we apply in this work come from Bernoulli Statistical Distribution Modelling (BSDM) and Monte Carlo Sampling Analysis (MCSA).
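A minimal sketch of the kind of analysis the abstract describes, assuming each sampled translation segment is modelled as a Bernoulli pass/fail judgment; the true pass rate and the sample sizes are made-up illustrative values.

```python
# Minimal sketch: how the width of a confidence interval for a Bernoulli
# "pass/fail" quality judgment shrinks with sample size, estimated both
# analytically (Wald interval) and by Monte Carlo sampling.
# The true pass rate p and the sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
p_true = 0.85            # assumed true proportion of acceptable segments
z = 1.96                 # ~95% confidence level
n_simulations = 10_000

for n in (50, 100, 500, 2000):
    # Analytic (Wald) half-width around the true proportion
    wald_half_width = z * np.sqrt(p_true * (1 - p_true) / n)

    # Monte Carlo: repeatedly draw samples of size n and observe the spread
    samples = rng.binomial(n, p_true, size=n_simulations) / n
    mc_low, mc_high = np.percentile(samples, [2.5, 97.5])

    print(f"n={n:5d}  Wald ±{wald_half_width:.3f}  "
          f"MC 95% range [{mc_low:.3f}, {mc_high:.3f}]")
```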
Human evaluation has always been expensive, while researchers struggle to trust automatic metrics. To address this, we propose to customise traditional metrics by taking advantage of pre-trained language models (PLMs) and a limited amount of human-labelled scores. We first re-introduce the hLEPOR metric factors, followed by the Python version we developed (ported), which achieves automatic tuning of the weighting parameters in the hLEPOR metric. We then present the customised hLEPOR (cushLEPOR), which uses the Optuna hyper-parameter optimisation framework to fine-tune those weighting parameters towards better agreement with pre-trained language models (using LaBSE) on the exact MT language pairs that cushLEPOR is deployed to. We also optimise cushLEPOR towards professional human evaluation data based on the MQM and pSQM frameworks for the English-German and Chinese-English language pairs. The experimental investigation shows that cushLEPOR boosts hLEPOR towards better agreement with PLMs such as LaBSE at a much lower cost, and towards better agreement with human evaluations including MQM and pSQM scores, and performs better than BLEU (data available at https://github.com/poethan/cushLEPOR). The official results show that our submission won three language pairs, including English-German and Chinese-English via cushLEPOR(LM), and English-Russian on the TED domain via hLEPOR.
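As a rough illustration of the tuning loop described above (not the released cushLEPOR code), the sketch below uses Optuna to tune the weight of a toy weighted-harmonic-mean metric so that its scores correlate better with a column of stand-in "LaBSE-like" target scores; the component scores and targets are random placeholders.

```python
# Rough sketch of metric-parameter tuning with Optuna (not the cushLEPOR code).
# Toy setup: a weighted harmonic mean of two per-sentence component scores is
# tuned so that it correlates with stand-in "LaBSE-like" target scores.
import numpy as np
import optuna

rng = np.random.default_rng(0)
precision_like = rng.uniform(0.3, 1.0, size=200)   # placeholder component 1
recall_like = rng.uniform(0.3, 1.0, size=200)      # placeholder component 2
target = 0.7 * precision_like + 0.3 * recall_like  # stand-in for PLM scores


def metric(alpha: float) -> np.ndarray:
    # Weighted harmonic mean of the two components (hLEPOR-style weighting).
    return (1 + alpha) * precision_like * recall_like / (
        alpha * precision_like + recall_like
    )


def objective(trial: optuna.Trial) -> float:
    alpha = trial.suggest_float("alpha", 0.1, 10.0, log=True)
    # Maximise Pearson correlation between metric scores and target scores.
    return float(np.corrcoef(metric(alpha), target)[0, 1])


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```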
This short report reviews the current state of the research and methodology on theoretical and practical aspects of Artificial Neural Networks (ANN). It was prepared to gather state-of-the-art knowledge needed to construct complex, hypercomplex and fuzzy neural networks. The report reflects the individual interests of the authors and by no means can be treated as a comprehensive review of the ANN discipline. Considering the fast development of this field, a detailed review is currently impossible within a reasonable number of pages. The report is an outcome of the Project 'The Strategic Research Partnership for the mathematical aspects of complex, hypercomplex and fuzzy neural networks' meeting at the University of Warmia and Mazury in Olsztyn, Poland, organized in September 2022.
Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word's visual makeup as a series of glyphs. To quantify the extent of this effect, we conduct a series of controlled experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Transferring these learnings onto the visual domain, we train a suite of image generation models, and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples.
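To make the character-aware/character-blind distinction concrete, the snippet below (an illustration, not the paper's experimental setup) contrasts how a byte-level tokenizer such as ByT5 and a subword tokenizer such as T5 split the same rare word; only the former exposes the word's spelling to the model.

```python
# Illustration of character-aware vs. character-blind input features
# (not the paper's code): a byte-level tokenizer exposes each character,
# while a subword tokenizer collapses the word into opaque pieces.
from transformers import AutoTokenizer

word = "floccinaucinihilipilification"

subword_tok = AutoTokenizer.from_pretrained("t5-small")        # character-blind
byte_tok = AutoTokenizer.from_pretrained("google/byt5-small")  # character-aware

print("T5 subword pieces:", subword_tok.tokenize(word))
print("ByT5 byte-level ids:", byte_tok(word)["input_ids"][:10], "...")
```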
We apply topological data analysis (TDA) to speech classification problems and to the introspection of a pretrained speech model, HuBERT. To this end, we introduce a number of topological and algebraic features derived from Transformer attention maps and embeddings. We show that a simple linear classifier built on top of such features outperforms a fine-tuned classification head. In particular, we achieve an improvement of about $9\%$ accuracy and $5\%$ ERR on four common datasets; on CREMA-D, the proposed feature set reaches a new state-of-the-art performance with an accuracy of $80.155$. We also show that topological features are able to reveal functional roles of speech Transformer heads; e.g., we find heads capable of distinguishing between pairs of sample sources (natural/synthetic) or voices without any downstream fine-tuning. Our results demonstrate that TDA is a promising new approach for speech analysis, especially for tasks that require structural prediction.
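As a simplified illustration of the idea (not the paper's feature set), one can threshold an attention map into a graph and use the number of connected components at several thresholds, a zero-dimensional topological summary, as input to a linear classifier; the attention maps and labels below are random placeholders.

```python
# Simplified illustration (not the paper's feature set): threshold attention
# maps into undirected graphs, count connected components at several
# thresholds (a 0-dimensional topological summary), and feed the resulting
# feature vector to a linear classifier.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.linear_model import LogisticRegression


def topo_features(attention: np.ndarray, thresholds=(0.01, 0.05, 0.1)) -> list:
    feats = []
    for t in thresholds:
        adj = csr_matrix((attention + attention.T) > t)  # symmetrise + threshold
        n_components, _ = connected_components(adj, directed=False)
        feats.append(n_components)
    return feats


# Placeholder data: random "attention maps" and binary labels.
rng = np.random.default_rng(0)
maps = [rng.dirichlet(np.ones(32), size=32) for _ in range(100)]
labels = rng.integers(0, 2, size=100)

X = np.array([topo_features(a) for a in maps])
clf = LogisticRegression().fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```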
Recent work has demonstrated that natural language processing techniques can support consumer protection by automatically detecting unfair clauses in the Terms of Service (ToS) Agreement. This work demonstrates that transformer-based ToS analysis systems are vulnerable to adversarial attacks. We conduct experiments attacking an unfair-clause detector with universal adversarial triggers. Experiments show that a minor perturbation of the text can considerably reduce the detection performance. Moreover, to measure the detectability of the triggers, we conduct a detailed human evaluation study by collecting both answer accuracy and response time from the participants. The results show that the naturalness of the triggers remains key to tricking readers.
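A minimal sketch of the attack setting (with a hypothetical classifier stub, not the paper's system): a fixed trigger phrase is prepended to every unfair clause, and we measure how many previously detected clauses now evade detection.

```python
# Minimal sketch of evaluating a universal adversarial trigger against an
# unfair-clause detector. `detect_unfair` is a hypothetical classifier stub;
# the clauses and trigger passed in are illustrative placeholders.
from typing import Callable, List


def attack_success_rate(
    detect_unfair: Callable[[str], bool],
    unfair_clauses: List[str],
    trigger: str,
) -> float:
    """Fraction of unfair clauses that evade detection once the trigger is prepended."""
    evaded = sum(
        detect_unfair(clause) and not detect_unfair(f"{trigger} {clause}")
        for clause in unfair_clauses
    )
    return evaded / len(unfair_clauses)


# Usage (with a placeholder model and trigger):
# rate = attack_success_rate(my_model.predict_is_unfair, clauses, "moreover therein")
```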
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
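For instance, the released checkpoints can be loaded with the Hugging Face `transformers` library; the snippet below is a usage sketch that uses the smaller publicly hosted `bigscience/bloom-560m` checkpoint for illustration rather than the full 176B model.

```python
# Usage sketch: loading a BLOOM checkpoint from the Hugging Face Hub.
# The smaller bigscience/bloom-560m checkpoint is used here for illustration;
# the full 176B model requires substantially more memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```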
Federated Learning (FL) enables the training of Deep Learning models without centrally collecting possibly sensitive raw data. This paves the way for stronger privacy guarantees when building predictive models. The most used algorithms for FL are parameter-averaging based schemes (e.g., Federated Averaging) that, however, have well known limits: (i) Clients must implement the same model architecture; (ii) Transmitting model weights and model updates implies high communication cost, which scales up with the number of model parameters; (iii) In presence of non-IID data distributions, parameter-averaging aggregation schemes perform poorly due to client model drifts. Federated adaptations of regular Knowledge Distillation (KD) can solve and/or mitigate the weaknesses of parameter-averaging FL algorithms while possibly introducing other trade-offs. In this article, we provide a review of KD-based algorithms tailored for specific FL issues.
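The sketch below is a schematic illustration of the basic idea behind KD-based aggregation, not any specific algorithm from this review: clients exchange soft predictions on a shared public proxy set instead of model weights, which lifts the identical-architecture requirement and decouples communication cost from parameter count; all data shown are placeholders.

```python
# Schematic sketch of KD-style federated aggregation (not a specific algorithm
# from this review): clients share soft predictions on a public proxy set
# instead of model weights, so architectures may differ and the payload size
# does not scale with the number of parameters.
import numpy as np


def aggregate_soft_labels(client_logits: list) -> np.ndarray:
    """Average client probability distributions on the shared public set."""
    probs = []
    for logits in client_logits:
        shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        probs.append(np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True))
    return np.mean(probs, axis=0)  # shape: (n_public_examples, n_classes)


# Placeholder logits from 3 clients on 5 public examples with 4 classes.
rng = np.random.default_rng(0)
client_logits = [rng.normal(size=(5, 4)) for _ in range(3)]
teacher_targets = aggregate_soft_labels(client_logits)
# Each client (or a server model) is then trained to match `teacher_targets`
# on the public set, e.g. with a KL-divergence distillation loss.
print(teacher_targets.round(3))
```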
Developmental dysplasia of the hip (DDH) is a condition in infants where the femoral head is incorrectly located in the hip joint. We propose a deep learning algorithm for segmenting key structures within ultrasound images, employing this to calculate Femoral Head Coverage (FHC) and provide a screening diagnosis for DDH. To our knowledge, this is the first study to automate FHC calculation for DDH screening. Our algorithm outperforms the international state of the art, agreeing with expert clinicians on 89.8% of our test images.
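The following is a simplified geometric sketch of how a coverage ratio like FHC could be derived from segmentation masks; it is not the paper's implementation and not a clinically validated definition, and the mask, baseline, and orientation are illustrative assumptions.

```python
# Simplified geometric sketch (not the paper's implementation, and not a
# clinically validated definition): approximate Femoral Head Coverage as the
# fraction of the femoral head's vertical extent lying below an ilium
# baseline, given a binary segmentation mask of the femoral head.
import numpy as np


def femoral_head_coverage(head_mask: np.ndarray, baseline_row: int) -> float:
    """head_mask: 2-D binary mask of the femoral head; baseline_row: row index
    of the (horizontal) ilium baseline in the same image coordinates."""
    rows = np.where(head_mask.any(axis=1))[0]
    top, bottom = rows.min(), rows.max()
    diameter = bottom - top + 1
    covered = max(0, bottom - max(top, baseline_row) + 1)
    return covered / diameter


# Placeholder example: a 100x100 mask with a circular "femoral head".
yy, xx = np.mgrid[:100, :100]
head = ((yy - 60) ** 2 + (xx - 50) ** 2) <= 20 ** 2
print(f"FHC ≈ {femoral_head_coverage(head, baseline_row=55):.2%}")
```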